
    Haptic search with finger movements: using more fingers does not necessarily reduce search times

    Two haptic serial search tasks were used to investigate how the separations between items, and the number of fingers used to scan them, influence search time and search strategy. In both tasks participants had to search for a target (a cross) among a fixed number of non-targets (circles). The items were placed in a straight line. The target’s position was varied within blocks, and the inter-item separation was varied between blocks. In the first experiment participants used their index finger to scan the display. As expected, search time depended on target position as well as on item separation. For larger separations participants’ movements were jerky, resembling ‘saccades’ and ‘fixations’, while for the shortest separation the movements were smooth. When only the time in contact with an item was considered, search times were the same for all separation conditions. Furthermore, participants never continued their movement after they encountered the target. These results suggest that participants did not use the time during which they were moving between items to process information about the items. The search times were somewhat shorter than those in a static search experiment (Overvliet et al. in Percept Psychophys, 2007a), where multiple items were presented to the fingertips simultaneously. To investigate whether this was because the finger was moving or because only one finger was stimulated, we conducted a second experiment in which participants were asked to place three fingers in line and use them together to scan the items. Doing so increased the time in contact with the items for all separations, so search times were presumably longer in the static search experiment because multiple fingers were involved. This may be caused by the time that it takes to switch from one finger to the other.
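
    The serial, self-terminating pattern described above can be summarised in a simple timing model. The sketch below is purely illustrative, and its parameter names and values are assumptions rather than figures from the study: total search time is the per-item contact time multiplied by the number of items scanned, plus the separation-dependent movement time between items, with scanning stopping at the target.

        # Illustrative model of haptic serial search (all parameters are
        # hypothetical, not taken from the study): search ends at the target,
        # so total time is the sum of per-item contact times plus the time
        # spent travelling between items.

        def search_time(target_pos, separation_cm, t_contact=0.4, move_speed=10.0):
            """Predicted search time (s) for a target at 1-based position target_pos.

            t_contact  : time in contact with each item (constant across
                         separations, as the abstract reports)
            move_speed : scanning speed between items, in cm/s
            """
            items_scanned = target_pos           # serial and self-terminating
            moves = target_pos - 1               # gaps traversed before the target
            return items_scanned * t_contact + moves * (separation_cm / move_speed)

        for sep in (1.0, 2.0, 4.0):
            times = [round(search_time(p, sep), 2) for p in range(1, 6)]
            print(f"separation {sep} cm: {times}")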

    A hierarchical sensorimotor control framework for human-in-the-loop robotic hands

    Human manual dexterity relies critically on touch. Robotic and prosthetic hands are much less dexterous and make little use of the many tactile sensors available. We propose a framework, modeled on the hierarchical sensorimotor controllers of the nervous system, to link sensing to action in human-in-the-loop, haptically enabled artificial hands.
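
    As a rough illustration of such a hierarchy (the classes and signals below are hypothetical, not the authors’ framework), a fast low-level loop can correct grip force from tactile slip signals while a slower high-level layer sets the goal:

        # A hypothetical two-level controller in the spirit of the framework
        # described above (names are assumptions, not the authors' API): a
        # high-level layer sets a grasp goal, and a low-level reflex layer
        # adjusts grip force from tactile slip feedback, mimicking fast
        # spinal-like loops nested under slower cortical-like ones.

        class LowLevelReflex:
            def __init__(self, base_force=1.0, gain=0.5):
                self.force = base_force
                self.gain = gain

            def step(self, slip_signal):
                # Slip detected by tactile sensors drives a rapid force increase.
                self.force += self.gain * slip_signal
                return self.force

        class HighLevelController:
            def __init__(self):
                self.reflex = LowLevelReflex()

            def hold(self, tactile_stream):
                # The slow loop monitors the grasp; the fast loop handles
                # disturbances sample by sample.
                return [self.reflex.step(slip) for slip in tactile_stream]

        controller = HighLevelController()
        print(controller.hold([0.0, 0.2, 0.5, 0.0]))  # force rises only on slip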

    Serial search for fingers of the same hand but not for fingers of different hands

    In most haptic search tasks, tactile stimuli are presented to the fingers of both hands. In such tasks, the search pattern for some object features, such as the shape of raised line symbols, has been found to be serial. The question is whether this search is serial over all fingers irrespective of hand, or serial over the fingers of each hand and parallel over the two hands. To investigate this issue, we determined the speed of static haptic search when two items were presented to two fingers of the same hand and when two items were presented to two fingers of different hands. We compared the results with predictions for parallel and serial search based on the results of a previous study using the same items and a similar task. The results indicate that two fingers of the same hand process information in a serial manner, while two fingers of two different hands process information in parallel. Thus, considering the individual fingers as independent units in haptic search may not be justified, because the hand that they belong to matters.
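
    The serial and parallel predictions being compared can be made concrete with a toy calculation (the timing values below are invented for illustration): if processing one item takes about time t, serial processing of two items predicts roughly 2t, whereas independent parallel processing predicts about max(t1, t2), close to t.

        # Toy illustration of the serial-vs-parallel predictions the study
        # tests; the numbers are made up, not taken from the paper.
        import random

        t_single = 1.2  # assumed single-item processing time, in seconds

        def serial_prediction(t1, t2):
            return t1 + t2        # same hand: one finger after the other

        def parallel_prediction(t1, t2):
            return max(t1, t2)    # different hands: both fingers at once

        t1 = t_single + random.gauss(0, 0.05)
        t2 = t_single + random.gauss(0, 0.05)
        print(f"serial prediction:   {serial_prediction(t1, t2):.2f} s")
        print(f"parallel prediction: {parallel_prediction(t1, t2):.2f} s")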

    Tactual perception: a review of experimental variables and procedures

    This paper reviews the literature on tactual perception. Throughout this review we highlight some of the most relevant variables in the touch literature: the interaction between touch and other senses; the type of stimuli, from abstract stimuli such as vibrations to two- and three-dimensional stimuli, also considering concrete stimuli such as the relation between familiar and unfamiliar stimuli or the haptic perception of faces; the type of participants, separating studies with blind participants from studies with children and adults, and an analysis of sex differences in performance; and finally, the type of tactile exploration, considering conditions of active and passive touch, the relevance of movement in touch, and the relation between exploration and time. This review intends to present an organised overview of the main variables in touch experiments, attending to the main findings described in the literature, to guide the design of future work on tactual perception and memory. This work was funded by the Portuguese “Foundation for Science and Technology” through PhD scholarship SFRH/BD/35918/2007.

    The development of path integration: combining estimations of distance and heading

    Efficient daily navigation is underpinned by path integration, the mechanism by which we use self-movement information to update our position in space. This process is well understood in adulthood, but there has been relatively little study of path integration in childhood, leading to its underrepresentation in accounts of navigational development. Previous research has shown that estimates of both distance and heading tend to be less accurate in children than in adults, although there have been no studies of the combined estimation of distance and heading that typifies naturalistic path integration. In the present study, 5-year-olds and 7-year-olds took part in a triangle-completion task, in which they were required to return to the start point of a multi-element path using only idiothetic information. Performance was compared with that of a sample of adult participants, who were found to be more accurate than children on measures of landing error, heading error, and distance error. Seven-year-olds were significantly more accurate than 5-year-olds on measures of landing error and heading error, although the difference between groups was much smaller for distance error. All measures were reliably correlated with age, demonstrating a clear development of path integration abilities within the age range tested. Taken together, these data make a strong case for the inclusion of path integration in developmental models of spatial navigational processing.
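
    The computation underlying triangle completion can be illustrated with a generic, textbook-style path-integration calculation (not the paper’s analysis): sum the outbound legs as vectors built from the distance and heading estimates, then invert the sum to obtain the homing vector.

        # Generic path-integration example (illustrative, not the paper's
        # analysis): accumulate outbound displacement vectors from estimated
        # distances and headings, then return along the inverted sum.
        import math

        legs = [(2.0, 0.0), (1.5, 90.0)]   # (distance in m, heading in degrees)

        x = sum(d * math.cos(math.radians(h)) for d, h in legs)
        y = sum(d * math.sin(math.radians(h)) for d, h in legs)

        home_distance = math.hypot(x, y)
        home_heading = math.degrees(math.atan2(-y, -x)) % 360.0

        print(f"return leg: {home_distance:.2f} m at {home_heading:.1f} deg")
        # Errors in the distance or heading estimates propagate into the
        # landing position, which is why landing, heading and distance error
        # are measured separately.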

    The Grasping Side of Odours

    Background: Research on multisensory integration during natural tasks such as reach-to-grasp is still in its infancy. Crossmodal links between vision, proprioception and audition have been identified, but how olfaction contributes to the planning and control of reach-to-grasp movements has not been decisively shown. We used kinematics to explicitly test the influence of olfactory stimuli on reach-to-grasp movements. Methodology/Principal Findings: Subjects were requested to reach towards and grasp a small or a large visual target (i.e., a precision grip, involving the opposition of index finger and thumb, for a small target, and a power grip, involving the flexion of all digits around the object, for a large target) in the absence or presence of an odour evoking either a small or a large object that, if grasped, would require a precision grip or a whole-hand grasp, respectively. When the type of grasp evoked by the odour did not coincide with that for the visual target, interference effects were evident in the kinematics of hand shaping and the level of synergies amongst fingers decreased. When the visual target and the object evoked by the odour required the same type of grasp, facilitation emerged and the intrinsic relations amongst individual fingers were maintained. Conclusions/Significance: This study demonstrates that olfactory information contains highly detailed information able to elicit the planning of a reach-to-grasp movement suited to interact with the evoked object. The findings offer a substantial…

    Integration and disruption effects of shape and texture in haptic search

    In a search task, where one has to determine the presence of a target among distractors, the target is sometimes found easily, whereas in other searches it is much harder to find. Performance in a search task is influenced by the identity of the target, the identity of the distractors and the differences between the two. In this study, these factors were manipulated by varying the target and distractors in shape (cube or sphere) and roughness (rough or smooth) in a haptic search task. Participants had to grasp a bundle of items and determine as fast as possible whether a predefined target was present. Roughness and edges were found to be relatively salient features, and search for the presence of these features was faster than search for their absence. If the task was easy, the addition of these features could also disrupt performance, even if they were irrelevant to the search task. Another important finding was that search for a target that differed from the distractors in two properties was faster than search with only a single property difference, although this held only when the two target properties were non-salient. This means that shape and texture can be integrated effectively. Finally, edges were found to be more beneficial than disruptive to a search task, whereas for roughness it was the other way round.

    Cross-Modal Object Recognition Is Viewpoint-Independent

    BACKGROUND: Previous research suggests that visual and haptic object recognition are viewpoint-dependent both within- and cross-modally. However, this conclusion may not be generally valid, as it was reached using objects oriented along their extended y-axis, resulting in differential surface processing in vision and touch. In the present study, we removed this differential by presenting objects along the z-axis, thus making all object surfaces more equally available to vision and touch. METHODOLOGY/PRINCIPAL FINDINGS: Participants studied previously unfamiliar objects, in groups of four, using either vision or touch. Subsequently, they performed a four-alternative forced-choice object identification task with the studied objects presented in both unrotated and rotated (180 degrees about the x-, y-, and z-axes) orientations. Rotation impaired within-modal recognition accuracy in both vision and touch, but not cross-modal recognition accuracy. Within-modally, visual recognition accuracy was reduced more by rotation about the x- and y-axes than about the z-axis, whilst haptic recognition was equally affected by rotation about all three axes. Cross-modal (but not within-modal) accuracy correlated with spatial (but not object) imagery scores. CONCLUSIONS/SIGNIFICANCE: The viewpoint-independence of cross-modal object identification points to its mediation by a high-level abstract representation. The correlation between spatial imagery scores and cross-modal performance suggests that construction of this high-level representation is linked to the ability to perform spatial transformations. Within-modal viewpoint-dependence appears to have a different basis in vision than in touch, possibly because surface occlusion is important in vision but not in touch.
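
    For concreteness, the three 180-degree test rotations can be written as standard rotation matrices (generic geometry, not code from the study); note that only the z-axis rotation leaves an object’s extended z-axis pointing the same way.

        # The three 180-degree rotations used as test orientations, written as
        # standard rotation matrices (generic geometry, not the study's code).
        import numpy as np

        Rx = np.diag([1.0, -1.0, -1.0])   # 180 deg about the x-axis
        Ry = np.diag([-1.0, 1.0, -1.0])   # 180 deg about the y-axis
        Rz = np.diag([-1.0, -1.0, 1.0])   # 180 deg about the z-axis

        p = np.array([0.3, 0.5, 2.0])     # a point on an object extended along z
        for name, R in (("x", Rx), ("y", Ry), ("z", Rz)):
            print(f"180 deg about {name}: {R @ p}")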